Energy conservation through memory channel shutdown
Patent abstract:
ENERGY CONSERVATION THROUGH MEMORY CHANNEL SHUTDOWN. The present invention relates to a method that includes deciding to enter a lower power state and, in response, turning off a memory channel in a computer system, while other memory channels in the computer system remain active so that the computer remains operational while the memory channel is turned off.
Publication number: BR112014015441B1
Application number: R112014015441-4
Filing date: 2011-12-22
Publication date: 2021-05-25
Inventors: Murugasamy K. Nachimuthu; Mohan J. Kumar
Applicant: Intel Corporation
Primary IPC class:
Patent description:
Field of the Invention [001] This invention relates to the field of computer systems. More particularly, this invention relates to an apparatus and a method for implementing a multi-level memory hierarchy. Description of Related Art A. Current Memory and Storage Configurations [002] One of the factors currently limiting computer innovation is memory and storage technology. In conventional computer systems, system memory (also known as main memory, primary memory, or executable memory) is typically implemented with dynamic random access memory (DRAM). DRAM-based memory consumes power even when no reading or writing is occurring, because it must keep its internal capacitors constantly charged. DRAM-based memory is volatile, meaning that data stored in it is lost once power is removed. Conventional computer systems also rely on multiple levels of cache to improve performance. A cache is high-speed memory placed between the processor and system memory to service memory access requests faster than they could be serviced from system memory. Such caches are usually implemented with static random access memory (SRAM). Cache management protocols can be used to ensure that the most frequently accessed data and instructions are stored within one of the cache levels, thereby reducing the number of memory access transactions and improving performance. [003] With regard to bulk storage (also known as secondary storage or disk storage), conventional bulk storage devices include magnetic media (e.g., hard disk drives), optical media (e.g., compact discs (CDs)), holographic media, and/or bulk-storage flash memory (solid state drives (SSDs), removable flash drives, etc.). These storage devices are generally considered input/output (I/O) devices because they are accessed by the processor through multiple I/O adapters, which implement multiple I/O protocols. These I/O adapters and I/O protocols consume a considerable amount of power and can have a considerable impact on integrated circuit area and platform form factor. Portable or mobile devices (such as laptops, netbooks, tablets, personal digital assistants (PDAs), digital media players, portable gaming devices, digital cameras, mobile phones, smartphones, feature phones, etc.), which have limited battery life when they are not connected to a permanent power supply, may contain bulk storage devices (such as Embedded Multimedia Card (eMMC) and Secure Digital (SD) cards) that are typically coupled to the processor via low-power interconnects and I/O controllers, in order to cope with limited energy availability in active and idle states. [004] With respect to firmware memory (such as boot memory, also known as BIOS flash), a conventional computer system typically uses flash memory devices to store persistent system information that is read frequently but rarely (or never) written. For example, the initial instructions executed by a processor to initialize important system components during a boot process (Basic Input and Output System (BIOS) images) are typically stored in a flash memory device. Flash memory devices currently available on the market generally have limited speed (e.g., 50 MHz). This speed is further reduced by the overhead of read protocols (e.g., to 2.5 MHz). In order to speed up BIOS execution, conventional processors typically cache a portion of the BIOS code during the Pre-Extensible Firmware Interface (PEI) phase of the boot process.
Processor cache size imposes a restriction on the size of the BIOS code used in the PEI phase (also known as "PEI BIOS code"). B. Phase-Change Memory (PCM) and Related Technologies [005] Phase-change memory (PCM), sometimes also called phase-change random access memory (PRAM or PCRAM), PCME, Ovonic Unified Memory, or Chalcogenide RAM (C-RAM), is a type of non-volatile computer memory that exploits the unique behavior of chalcogenide glass. As a result of heat produced by the passage of electric current, chalcogenide glass can switch between two states: crystalline and amorphous. Recent versions of PCM can achieve two additional distinct states. [006] PCM provides higher performance than flash because the PCM memory element can be switched more quickly, writing (changing individual bits to 1 or 0) can be done without first erasing an entire block of cells, and degradation from writes is slower (a PCM device can survive approximately 100 million write cycles; PCM degradation is due to thermal expansion during programming, migration of metals (and other materials), and other mechanisms). Brief Description of the Drawings [007] The following description and accompanying drawings are used to illustrate embodiments of the invention. In the drawings: [008] Figure 1 illustrates a cache and system memory organization according to embodiments of the invention; [009] Figure 2 illustrates a memory and storage hierarchy employed in embodiments of the invention; [0010] Figure 3 illustrates a computer system memory with a DRAM section of system memory and a PCMS section of system memory; [0011] Figure 4 shows a methodology for turning off a memory channel; [0012] Figure 5 shows a methodology for reactivating a memory channel; [0013] Figure 6 shows a memory power state table to be used by a power management system; [0014] Figure 7 shows components for implementing the shutdown or reactivation of a memory channel. Detailed Description [0015] In the following description, numerous specific details, such as logic implementations, operation codes, means of specifying operands, implementations of resource partitioning/sharing/duplication, types and interrelationships of system components, and logical partitioning/integration options, are presented in order to provide a more complete understanding of the present invention. It will be evident, however, to those skilled in the art that the invention can be practiced without such specific details. In other instances, control structures, gate-level circuits, and complete sequences of software instructions have not been presented in detail, so as not to obscure the invention. Those skilled in the art, with the descriptions presented, will be able to implement the appropriate functionality without undue experimentation. [0016] References in the specification to "an embodiment", "an example embodiment", etc., indicate that the described embodiments of the invention may contain certain features, structures, or characteristics, but not all embodiments necessarily contain those particular features, structures, or characteristics. Furthermore, such phrases do not necessarily refer to the same embodiment. Additionally, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to realize such feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described.
[0017] In the following description and claims, the terms "coupled" and "connected", along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. "Coupled" is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, cooperate or interact with each other. "Connected" is used to indicate the establishment of communication between two or more elements that are coupled to each other. [0018] Bracketed text and blocks with dashed or dotted borders (e.g., large dashes, small dashes, dot-dash, dots) may occasionally be used to illustrate optional operations or components that add features to embodiments of the invention. However, such notation should not be interpreted as meaning that these are the only options or optional operations/components, and/or that blocks with solid borders are not optional in certain embodiments of the invention. Introduction [0019] Memory capacity and performance requirements continue to increase with the growing number of processor cores and new usage models such as virtualization. Furthermore, memory power and cost have become significant components of the overall power and cost, respectively, of electronic systems. [0020] Some embodiments of the invention address the above challenges by intelligently subdividing the performance and capacity requirements between memory technologies. The focus of this approach is on delivering performance with a relatively small amount of high-speed memory, such as DRAM, while implementing the bulk of system memory with a considerably cheaper, denser, non-volatile random access memory (NVRAM). The embodiments of this invention described below define platform configurations that enable hierarchical organizations of memory subsystems for the use of NVRAM. The use of NVRAM in the memory hierarchy also enables new uses, such as expanded boot space and bulk storage implementations, as described in detail below. [0021] Figure 1 illustrates a cache and system memory organization according to embodiments of the invention. Specifically, Figure 1 illustrates a memory hierarchy including a set of processor internal caches 120, "near memory" functioning as a far memory cache 121, which may contain both internal caches 106 and external caches 107-109, and "far memory" 122. One particular type of memory that can be used as "far memory" in some embodiments of the invention is non-volatile random access memory ("NVRAM"). As such, an overview of NVRAM is presented below, followed by an overview of far memory and near memory. Non-Volatile Random Access Memory (NVRAM) [0022] There are many possible technology options for NVRAM, including PCM, Phase Change Memory and Switch (PCMS) (the latter being a more specific implementation of the former), byte-addressable persistent memory (BPRAM), universal memory, Ge2Sb2Te5, programmable metallization cell (PMC), resistive memory (RRAM), RESET (amorphous) cell, SET (crystalline) cell, PCME, Ovshinsky memory, ferroelectric memory (also known as polymer memory and poly(N-vinylcarbazole) memory), ferromagnetic memory (also known as Spintronics, SPRAM (spin-transfer torque RAM), STRAM (spin tunneling RAM), magnetoresistive memory, magnetic memory, magnetic random access memory (MRAM)), and semiconductor-oxide-nitride-oxide-semiconductor (SONOS, also known as dielectric memory).
[0023] For use in the memory hierarchy described in this application, NVRAM has the following characteristics: [0024] it retains its content even if power is removed, similar to the FLASH memory used in solid state disks (SSDs), and unlike SRAM and DRAM, which are volatile; [0025] lower power consumption when idle than memories such as SRAM and DRAM; [0026] random access similar to that of SRAM and DRAM (also known as random addressability); [0027] rewritable and erasable at a lower level of granularity (for example, at byte level) than the FLASH found in SSDs (which can only be rewritten and erased one "block" at a time; at least 64 Kbytes in size for NOR FLASH and 16 Kbytes for NAND FLASH); [0028] usable as system memory and allocated all or part of the system memory address space; [0029] capable of being coupled to the processor over a bus using a transactional protocol (a protocol that supports transaction identifiers (IDs) to distinguish different transactions, so that those transactions can be completed out of order) and allowing access at a level of granularity small enough to support operation of the NVRAM as system memory (for example, cache line sizes such as 64 or 128 bytes). For example, the bus may be a memory bus (for example, a DDR bus such as DDR3, DDR4, etc.) over which a transactional protocol is run, as opposed to the non-transactional protocol normally used. As another example, the bus may be one over which a transactional protocol (a native transactional protocol) normally runs, such as a PCI Express (PCIE) bus or a Desktop Management Interface (DMI) bus, or any other type of bus using a transactional protocol and a sufficiently small transaction payload size (for example, a cache line size of 64 or 128 bytes); and [0030] one or more of the following: [0031] higher write speed than non-volatile memory/storage technologies such as FLASH; [0032] very high read speed (faster than FLASH and close or equivalent to DRAM read speeds); [0033] directly writable (rather than requiring erasing (overwriting with 1s) before writing data, as in the FLASH memory used in SSDs); and/or [0034] orders of magnitude (e.g., 2 or 3) higher write endurance before failure (higher than the ROM and FLASH used in SSDs). [0035] As mentioned above, in contrast to FLASH memory, which must be rewritten and erased one complete "block" at a time, the level of granularity at which NVRAM is accessed in any given implementation may depend on the particular memory controller and memory bus, or other type of bus, to which the NVRAM is coupled. For example, in some implementations where NVRAM is used as system memory, the NVRAM may be accessed at the granularity of a cache line (e.g., a 64-byte or 128-byte cache line), despite an inherent ability to be accessed at byte granularity, because the cache line is the level at which the memory subsystem accesses memory. Thus, when NVRAM is deployed within a memory subsystem, it may be accessed at the same level of granularity as the DRAM (e.g., the "near memory") used in the same memory subsystem. Even so, the level of granularity of access to the NVRAM by the memory controller and memory bus or other type of bus is smaller than the block size used by FLASH and the access size of the I/O subsystem's controller and bus.
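As a purely illustrative aside (not part of the patent text), the cache-line granularity and transaction-ID tagging described in paragraphs [0029] and [0035] might be sketched in C as follows; the structure layout, field names, and the 64-byte line size are assumptions made for this sketch, not a description of any actual bus protocol:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CACHE_LINE_BYTES 64u   /* assumed line size; 128 bytes is also mentioned above */

/* Hypothetical transactional request: the transaction ID allows the device to
 * complete several outstanding requests out of order, as described in [0029]. */
struct nvram_request {
    uint32_t transaction_id;
    uint64_t line_address;                /* cache-line-aligned physical address */
    uint8_t  is_write;                    /* 1 = write, 0 = read */
    uint8_t  payload[CACHE_LINE_BYTES];   /* exactly one cache line of data */
};

/* Align an arbitrary byte address down to the cache line containing it: even
 * though NVRAM cells are byte-addressable, the memory subsystem accesses them
 * at this granularity ([0035]). */
static uint64_t cache_line_base(uint64_t byte_address)
{
    return byte_address & ~((uint64_t)CACHE_LINE_BYTES - 1);
}

int main(void)
{
    struct nvram_request req;
    memset(&req, 0, sizeof req);
    req.transaction_id = 7;
    req.line_address   = cache_line_base(0x12345);  /* aligns down to 0x12340 */
    req.is_write       = 0;
    printf("read line at 0x%llx (transaction %u)\n",
           (unsigned long long)req.line_address, req.transaction_id);
    return 0;
}
```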
[0036] NVRAM may also incorporate wear leveling algorithms to account for the fact that the storage cells at the far memory level begin to wear out after a number of write accesses, especially where a considerable number of writes may occur, as in a system memory implementation. Because high-cycle-count blocks are most likely to wear out in this manner, wear leveling spreads writes across the far memory cells by swapping the addresses of high-cycle-count blocks with those of low-cycle-count blocks. Note that most address swaps are typically transparent to application programs because they are handled by hardware, by lower-level software (e.g., a low-level driver or the operating system), or by a combination of the two. Far Memory [0037] The far memory 122 of some embodiments of the invention is implemented with NVRAM, but is not necessarily limited to any particular memory technology. Far memory 122 is distinguishable from other instruction and data memory/storage technologies in terms of its characteristics and/or its application in the memory/storage hierarchy. For example, far memory 122 is different from: [0038] static random access memory (SRAM), which may be used for the level 0 and level 1 internal processor caches 101a-b, 102a-b, 103a-b, and 104a-b dedicated to each of the processor cores 101-104, respectively, and the lowest-level cache (LLC) 105 shared by the processor cores; [0039] dynamic random access memory (DRAM) configured as a cache 106 internal to the processor 100 (on the same integrated circuit as the processor 100) and/or configured as one or more caches 107-109 external to the processor (in the same package or in a different package from the processor 100); [0040] FLASH memory/magnetic disk/optical disk applied as bulk storage (not shown); and [0041] memory such as FLASH memory or other read-only memory (ROM) applied as firmware memory (which may refer to boot ROM, BIOS flash, and/or TPM flash). [0042] Far memory 122 may be used as instruction and data storage that is directly addressable by a processor 100 and is able to keep pace sufficiently with the processor 100, in contrast to FLASH/magnetic disk/optical disk applied as bulk storage. Moreover, as discussed above and described in detail below, far memory 122 may be placed on a memory bus and may communicate directly with a memory controller that, in turn, communicates directly with the processor 100. [0043] Far memory 122 may be combined with other instruction and data storage technologies (e.g., DRAM) to form hybrid memories (also known as co-located PCM and DRAM; first-level memory and second-level memory; FLAM (FLASH and DRAM)). Note that at least some of the above technologies, including PCM/PCMS, may be used for bulk storage instead of, or in addition to, system memory, and when applied in this manner need not be randomly accessible, byte-addressable, or directly addressable by the processor. [0044] For ease of explanation, the terms NVRAM, PCM, PCMS, and far memory may be used interchangeably in the following discussion. However, it should be understood, as discussed above, that other technologies may also be used for far memory. Also, NVRAM is not limited to use as far memory. An Example System Memory Allocation Scheme [0045] Figure 1 illustrates how the various levels of caches 101-109 are configured with respect to a system physical address (SPA) space 116-119 in embodiments of the invention.
As mentioned above, this embodiment involves a processor 100 with one or more cores 101-104, each core having its own dedicated upper-level cache (L0) 101a-104a and mid-level cache (MLC) (L1) 101b-104b. Processor 100 also contains a shared LLC 105. The operation of these various cache levels is well understood and will not be described in detail here. [0046] The caches 107-109 illustrated in Figure 1 may be dedicated to a particular range of system memory addresses or to a set of non-contiguous address ranges. For example, cache 107 is dedicated to acting as a memory-side cache (MSC) for system memory address range #1 116, and caches 108 and 109 are dedicated to acting as MSCs for non-overlapping portions of system memory address ranges #2 117 and #3 118. This latter implementation may be used for systems in which the SPA space used by the processor 100 is interleaved into an address space used by the caches 107-109 (when configured as MSCs). In one embodiment, this latter address space is called the memory channel address (MCA) space. In one embodiment, the internal caches 101a-106 perform caching operations for the entire SPA space. [0047] System memory, as used herein, is memory visible to and/or directly addressable by software running on the processor 100; the caches 101a-109 may operate transparently to software, in the sense that they do not form a directly addressable part of the system address space, but the cores may also support the execution of instructions that allow software to exercise some control (allocation, policies, hints, etc.) over some or all of the caches. The subdivision of system memory into regions 116-119 may be performed manually, as part of a system configuration process (e.g., by a system designer), and/or may be performed automatically by software. [0048] In one embodiment, the system memory regions 116-119 are implemented with far memory (e.g., PCM) and, in some embodiments, with near memory configured as system memory. System memory address range #4 119 represents an address range implemented with a higher-speed memory, such as DRAM, which may be near memory configured in a system memory mode (as opposed to a caching mode). [0049] Figure 2 illustrates a memory/storage hierarchy 140 and the various configurable modes of operation of near memory 144 and NVRAM, according to embodiments of the invention. The memory/storage hierarchy 140 has several levels, including (1) a cache level 150, which may contain processor caches 150A (for example, caches 101a-105 in Figure 1) and, optionally, near memory acting as a cache for far memory 150B (in certain modes of operation); (2) a system memory level 151, which includes far memory 151B (e.g., NVRAM such as PCM) and near memory operating as system memory 151A; (3) a bulk storage level 152, which may include flash/magnetic/optical bulk storage 152B and/or NVRAM bulk storage 152A (e.g., a portion of the NVRAM 142); and (4) a firmware memory level 153, which may include BIOS flash 170 and/or BIOS NVRAM 172 and, optionally, Trusted Platform Module (TPM) NVRAM 173. [0050] As indicated, near memory 144 may be implemented to operate in a mode in which it acts as system memory 151A and occupies a portion of the SPA space (sometimes referred to as near memory "direct access" mode), and in one or more additional modes of operation, such as a scratchpad memory 192 or a write buffer 193.
In some embodiments, near memory may be partitioned, with each partition concurrently operating in a different one of the supported modes; and different embodiments may support configuring the partitions (sizes, modes) by hardware (e.g., fuses, pins), firmware, and/or software (for example, through a set of programmable range registers within the MSC controller 124 in which, for example, different binary codes can be stored to identify each mode and partition). [0051] As shown in Figure 2, system address space B 191 is used to illustrate an implementation in which all or part of near memory is assigned a portion of the system address space. In this embodiment, system address space B 191 represents the range of the system address space assigned to near memory 151A, and system address space A 190 represents the range of the system address space assigned to NVRAM 174. [0052] When operating in near memory direct access mode, all or part of the near memory acting as system memory 151A is directly visible to software and forms part of the SPA space. Such memory may be completely under software control. Such a scheme may create a non-uniform memory access (NUMA) domain for software, in which it obtains higher performance from near memory 144 relative to NVRAM system memory 174. By way of example, and not limitation, such a usage may be employed for certain high-performance computing (HPC) and graphics applications that require very fast access to certain data structures. [0053] Figure 2 also illustrates that a portion of the NVRAM 142 may be used as firmware memory. For example, the BIOS NVRAM portion 172 may be used to store BIOS images (instead of or in addition to storing BIOS information in BIOS flash 170). The BIOS NVRAM portion 172 may be part of the SPA space and is directly addressable by software running on the processor cores 101-104, whereas the BIOS flash 170 is addressed through the I/O subsystem 115. As another example, a Trusted Platform Module (TPM) NVRAM portion 173 may be used to protect sensitive system information (e.g., encryption keys). [0054] Thus, as indicated, the NVRAM 142 may be implemented to operate in several different roles, including as far memory 151B (e.g., when near memory 144 is present or operating in direct access mode); NVRAM bulk storage 152A; BIOS NVRAM 172; and TPM NVRAM 173. [0055] The choice of system memory and bulk storage devices may depend on the type of electronic platform on which embodiments of the invention are employed. For example, in personal computers, tablets, notebooks, smartphones, mobile phones, personal digital assistants (PDAs), portable media players, portable gaming devices, game consoles, digital cameras, switches, hubs, routers, set-top boxes, digital video recorders, and other devices with relatively small bulk storage requirements, bulk storage may be implemented using NVRAM 152A alone, or using NVRAM 152A combined with flash/magnetic/optical storage 152B. [0056] In other electronic platforms with relatively large bulk storage requirements (such as large-scale servers), bulk storage may be implemented with magnetic storage (such as hard disks) or any combination of magnetic storage, optical storage, holographic storage, flash memory, and NVRAM 152A.
In this case, the storage system hardware and/or software may implement various intelligent persistent storage allocation techniques to allocate blocks of persistent program code and data between the FM 151B/NVRAM storage 152A and the flash/magnetic/optical bulk storage 152B in an efficient or otherwise useful manner. [0057] For example, in one embodiment, a high-performance server is configured with near memory (such as DRAM), a PCMS device, and a bulk storage device for large amounts of persistent storage. In one embodiment, a notebook is configured with near memory and a PCMS device, which fills the role of both far memory and bulk storage device. One embodiment of a home or office desktop computer is configured similarly to a notebook, but may also contain one or more magnetic storage devices to provide large amounts of persistent storage capability. [0058] One embodiment of a tablet or mobile phone device is configured with PCMS memory but possibly with no near memory and no additional bulk storage (for cost and power savings). However, the tablet/phone may be configured with a removable bulk storage device, such as a pen drive or a PCMS stick. [0059] Various other types of devices may be configured as described above. For example, portable media players and/or personal digital assistants (PDAs) may be configured similarly to the tablets/phones described above, and game consoles may be configured similarly to desktops or laptops. Other devices that may be similarly configured include digital cameras, routers, TV set-top boxes, digital video recorders, televisions, and automobiles. An Example System Memory Allocation Scheme [0060] Figure 3 illustrates a memory controller 300 of a computer system with respective interfaces 301_1 to 301_8 for several memory channels (such as DDR channels) 302_1 to 302_8, each channel being capable of supporting one or more DIMM cards (that is, one or more DIMM cards can be plugged into the channel). Figures 4 and 5 illustrate methods for controlling the power consumption of a computer system by disabling/enabling a memory channel and its corresponding DIMM cards. For simplicity, eight memory channels are illustrated, but those skilled in the art will understand that the teachings herein can be applied to systems with different numbers of memory channels. [0061] According to the method of Figure 4, a decision is made (for example, by intelligent power management software such as ACPI) to put a computer system into a lower performance state by disabling a memory channel that is currently active. Conversely, according to the method of Figure 5, a decision is made to put the computer system into a higher performance state by activating a memory channel that is currently inactive. [0062] Recall from Figure 1 that some embodiments of computer systems with DRAM and NVRAM memory components may reserve a first portion of the system memory addresses for DRAM and a second portion of the system memory addresses for NVRAM. That is, in the "near memory acting as system memory" approach, the addressable system memory can comprise both DRAM (see, for example, near memory as system memory 151A in Figure 2) and PCMS (see, for example, far memory 151B in Figure 2, implemented as NVRAM system memory 174). [0063] According to one embodiment, referring again to Figure 3, each of the memory channels 302_1 to 302_8 is assigned a unique portion or segment of the computer system's memory address space, consistent with the storage space available on the memory channel.
The storage space available on a memory channel is, in turn, a function of the number of DIMM cards plugged into the memory channel and the storage density of the memory devices on those DIMM cards. [0064] According to one embodiment, a first portion 303 of the memory channels (and therefore a corresponding first portion/segment of the system memory address space) is reserved for DRAM DIMMs, and a second portion 304 of the memory channels (and therefore a corresponding second/remaining portion/segment of the system memory address space) is reserved for PCMS DIMMs. [0065] According to this particular embodiment, the DRAM storage space 303 does not act as a cache for the PCMS storage space 304. Instead, the system memory space is configured to keep "access time critical" information (e.g., program code instructions, or at least frequently used program code instructions) in the DRAM storage space 303, and "non-critical" or less access-time-critical information (e.g., data, or at least data with low access frequency) in the PCMS storage space 304. [0066] Thus, the operating system and/or virtual machine monitor running on the computer's CPU allocates the system memory address space in a manner consistent with this approach. For example, frequently used program code instructions are given address space corresponding to the memory channels with DRAM DIMMs, and less frequently used items are given address space corresponding to the memory channels with PCMS DIMMs. In various embodiments, the content stored at each address, whether DRAM or PCMS, is a fixed-width data word (e.g., a 64-byte or 128-byte data word) referred to as a "cache line". For convenience, the terms "content" and "cache line" will be used interchangeably in the following discussion to refer to the information stored at a system memory address. [0067] Referring to the methodology of Figure 4 as applied to the computer system described above, disabling a memory channel to enter a lower performance state includes disabling the memory channel's DIMM cards and their corresponding memory devices, and enabling a memory channel to enter a higher performance state involves enabling the memory channel's DIMM cards and their corresponding memory devices. [0068] Particularly effective tuning of power consumption and performance can be achieved when channels having DRAM DIMM cards are chosen to be enabled and disabled. Because DRAM devices are faster and consume more power than PCMS devices, dropping to a lower performance state by disabling a DRAM memory channel should considerably reduce both computer system performance and power consumption. Likewise, rising to a higher performance state by enabling a DRAM memory channel should considerably increase both computer system performance and power consumption. [0069] Memory management, however, is a matter of concern. Specifically, when a DRAM memory channel is disabled or enabled, the system memory address space must be efficiently reconfigured to accommodate the change in available DRAM memory space. This includes "moving" the contents of the DRAM channel being turned off to another location in system memory.
According to the methodology of Figure 4, the operating system and/or a virtual machine and/or a virtual machine monitor and/or a power management component of any of these (hereafter referred to as "system software") keeps track 402 of the utilization of the virtual addresses that are allocated to DRAM, in order to build an understanding of which virtual addresses are being accessed most frequently and/or which virtual addresses are being accessed less frequently. [0070] As is known in the art, system software is generally designed to refer to virtual addresses, and the underlying hardware is responsible for converting the virtual addresses into the corresponding physical addresses of the system's resident system memory. [0071] When a decision is made to disable a DRAM memory channel, the system software effectively reconfigures the DRAM address space so that the most frequently used virtual addresses remain assigned to DRAM address space, and a group of less frequently used DRAM virtual addresses, approximately equal or identical in number to the physical addresses held by the DRAM memory channel to be disabled, is reassigned to PCMS address space. The resulting reassignment of underlying physical addresses (including the migration of the content of the most frequently used addresses on the DRAM channel being turned off to a DRAM channel that remains active, and the migration of the content of the least used DRAM addresses into PCMS space) necessarily affects the virtual-to-physical address translations mentioned above. [0072] Typically, a translation lookaside buffer (TLB) 305 resident in the central processing unit (CPU) or "processor" 306 functions as a cache of virtual-to-physical address translations. The TLB is well known in the art, but a brief review of its role and operation is worthwhile. A TLB contains multiple translation entries (TEs), each TE identifying the physical address associated with a specific virtual address, also referred to as an address translation. Usually the translation itself is defined at the granularity of a memory page. Thus, a virtual address's TE contains the physical address of its corresponding memory page in the computer system's system memory. [0073] The TLB is designed to contain the set of TEs (up to the size of the TLB) whose associated virtual addresses have been called most recently by the executing program code. Since each virtual address called by the executing program code identifies either a specific program code instruction or a specific data item to be used, many processor architectures contain both an instruction TLB and a data TLB. [0074] In the case of an instruction TLB, during program code execution the virtual address of a next instruction is fetched and a lookup is performed in the instruction TLB to find a match between the instruction's virtual address and the virtual addresses within the TEs of the instruction TLB. In a common approach, the lowest-order bits of the virtual address are not used in the lookup, so the search parameter (that is, the higher-order bits of the virtual address) essentially corresponds to the address of a virtual memory page. If a match is found, the physical address in the TE holding the matching virtual address identifies the specific memory page in system memory where the desired instruction can be found.
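The TE/TLB lookup described in paragraphs [0072]-[0074] can be illustrated with the following C sketch; the structure layout, the 4 KB page size, the TLB capacity, and the linear search are simplifying assumptions made for illustration only and do not describe any particular processor's hardware:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT  12u    /* assumed 4 KB pages */
#define TLB_ENTRIES 64u    /* assumed TLB capacity */

/* Translation entry (TE): maps one virtual page to one physical page. */
struct te {
    uint64_t virtual_page;    /* high-order bits of the virtual address */
    uint64_t physical_page;   /* page location in system memory */
    bool     valid;
};

/* A toy instruction (or data) TLB: a small cache of recently used TEs. */
struct tlb {
    struct te entries[TLB_ENTRIES];
};

/* Look up a virtual address: only the high-order (page) bits take part in the
 * search. Returns true on a hit and produces the physical address; a miss
 * would cause the table lookup hardware to fetch the TE from the TE store in
 * system memory (not shown here). */
static bool tlb_lookup(const struct tlb *t, uint64_t vaddr, uint64_t *paddr)
{
    uint64_t vpage = vaddr >> PAGE_SHIFT;
    for (unsigned i = 0; i < TLB_ENTRIES; i++) {
        if (t->entries[i].valid && t->entries[i].virtual_page == vpage) {
            uint64_t offset = vaddr & ((1ull << PAGE_SHIFT) - 1);
            *paddr = (t->entries[i].physical_page << PAGE_SHIFT) | offset;
            return true;                           /* TLB hit */
        }
    }
    return false;                                  /* TLB miss */
}

int main(void)
{
    struct tlb t = {{{0}}};
    t.entries[0] = (struct te){ .virtual_page = 0x400, .physical_page = 0x9a, .valid = true };

    uint64_t paddr;
    if (tlb_lookup(&t, (0x400ull << PAGE_SHIFT) | 0x123, &paddr))
        printf("hit: physical address 0x%llx\n", (unsigned long long)paddr);
    return 0;
}
```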
[0075] If a matching virtual address is not found in the instruction TLB (an instruction TLB "miss"), the processor's table lookup hardware fetches the correct TE from system memory. The physical address within the TE obtained from system memory identifies the memory page in system memory where the next instruction can be found. Typically, a copy of the TE fetched from system memory is also loaded into the instruction TLB, and a less recently used TE is evicted from the instruction TLB. The original TE obtained from system memory remains in system memory. [0076] The data TLB, including the system's operation in response to a data TLB miss, works very similarly to what is described above, except that the virtual address points to a desired data item and the physical address found in the desired TE identifies the page in system memory where the desired data resides. [0077] It is important to note that the set of TEs containing all of the virtual-to-physical address translations (both instruction and data) for the set of virtual addresses that an operating program (such as an application and/or a virtual machine) may call during its operation is kept in a special store 307, referred to as the "TE store", which is held in system memory. In one embodiment, the TE store 307 for an operating program is loaded into system memory as part of loading the operating program into memory for execution. When several programs are running simultaneously on the computer, in one embodiment a TE store is maintained in system memory for each operating program. In a further embodiment, all TE stores, and therefore all TEs, are kept in a special segment of DRAM system memory on a DRAM channel that cannot be disabled. [0078] When the system memory space is reconfigured to account for the deactivation (or activation) of a memory channel as part of a power management decision to change the performance state of the computer, the movement of DRAM memory content described above creates a need, for each cache line that has to be migrated and therefore has a "new" physical address, to update its corresponding TE to reflect the new physical address location 404. The specific TEs that must be updated are: i) the TEs of the most frequently used DRAM addresses whose content is to be migrated from the DRAM channel being disabled to a DRAM channel that is not being disabled; and ii) the TEs of the less frequently used DRAM addresses whose corresponding content has to be migrated to PCMS address space. [0079] In an unlikely situation, all of the less frequently used DRAM addresses reside on the specific DRAM channel to be disabled. In this case, no DRAM addresses on any other DRAM channel are affected by the channel shutdown (that is, all of the content of the DRAM channel being turned off is migrated to PCMS storage). Thus, in this unlikely scenario, only the TEs of the DRAM addresses of the channel being deactivated are modified, to reflect a new physical address in PCMS memory. [0080] In the more likely scenario, some of the most frequently used DRAM addresses reside on the memory channel being powered off, and some of the least frequently used addresses reside on the remaining channels that will not be deactivated. In one embodiment, the number of least frequently used DRAM addresses identified for migration to PCMS storage is the same as (or approximately the same as) the number of addresses supported by the DRAM channel to be powered down.
This essentially amounts to equating the number of DRAM addresses marked for migration to PCMS storage space with the number of DRAM addresses that are "lost" by the DRAM channel shutdown. [0081] With this approach, the number of frequently used DRAM addresses on the DRAM channel to be deactivated, whose content is to be migrated to a new DRAM address (specifically, an address on another DRAM channel that will remain active), should be the same as the number of less frequently used DRAM addresses on the DRAM channels that will remain active, whose content is to be migrated to a new PCMS address (that is, an address in PCMS storage). In this way, the content of the former can replace the content of the latter in the DRAM space that remains active after the channel is turned off. That is, the content of the frequently used DRAM addresses on the DRAM channel being disabled can be written over the less frequently used DRAM addresses on the DRAM channels that are not being disabled. [0082] Thus, according to the method of Figure 4, the cache lines of the least used DRAM addresses, both those on the DRAM channel to be disabled and those on the other DRAM channels, are read from DRAM and written into the PCMS storage space 403. The physical address information of their respective pages, as kept in their corresponding TEs (in their respective TE stores, because multiple software applications may be affected by the channel shutdown) in system memory, is modified to reflect their respective new PCMS addresses 404. [0083] Then, the "newly freed" DRAM addresses on the active DRAM channels are re-populated with the cache lines of the most used DRAM addresses of the DRAM channel that will be turned off 405. [0084] In this way, each least used DRAM address that is freed on a remaining active channel is rewritten with a cache line from a more frequently used DRAM address on the channel being disabled. The TE entry in system memory for each of the memory pages of the most frequently used DRAM addresses being migrated from the channel to be deactivated to the remaining active DRAM channels is modified to reflect its new physical address location 406. Note that each new physical address corresponds to an address that was previously identified as being less frequently used. [0085] In one embodiment, as part of the original bring-up of the system (well before the decision to disable a DRAM channel), a section 308 of PCMS storage is reserved to receive an "entire DRAM channel's worth" of cache lines in the event of a DRAM channel shutdown. Here, no active information is stored in the PCMS section unless and until a DRAM channel is turned off, at which point a number of cache lines equivalent, in total data size, to the storage capacity of the DRAM channel being disabled is loaded into the section 308 from the DRAM storage space. Upon the updating of their corresponding TEs in their respective TE stores, these cache lines are thereafter accessed from the PCMS section to support the operation of the program. Several such sections of PCMS system memory may be pre-reserved, as described above, to support a system capable of running while several DRAM channels are disabled 407. [0086] As part of the system memory reconfiguration process described above, any copy of a modified TE residing in a TLB is invalidated. Note that system operation may also be suspended while the system memory is being reconfigured.
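By way of illustration only, the Figure 4 flow (usage tracking 402, eviction to PCMS 403/404, re-homing of hot content 405/406, and use of the reserved section 308/407) might be modeled in C roughly as below; the data structures, function names, and simple access counters are assumptions made for this sketch and are not the actual implementation of any operating system or power manager:

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Where a page's content currently resides in this simplified model. */
enum backing { IN_DRAM, IN_PCMS };

struct page {
    uint64_t     access_count;   /* 402: usage tracking kept by system software */
    enum backing backing;
    int          dram_channel;   /* meaningful only while backing == IN_DRAM */
};

/* Model of the Figure 4 shutdown: evict the coldest DRAM pages (anywhere) to
 * PCMS, one for each page held by the victim channel, then re-home whatever
 * remains on the victim channel to a surviving channel.  A real system would
 * also rewrite the affected TEs and invalidate their TLB copies. */
static void shutdown_channel(struct page *pages, size_t n,
                             int victim_channel, int surviving_channel)
{
    size_t lost = 0;                                 /* capacity being removed */
    for (size_t i = 0; i < n; i++)
        if (pages[i].backing == IN_DRAM && pages[i].dram_channel == victim_channel)
            lost++;

    for (size_t evicted = 0; evicted < lost; evicted++) {   /* 403/404 */
        size_t coldest = n;
        for (size_t i = 0; i < n; i++) {
            if (pages[i].backing != IN_DRAM)
                continue;
            if (coldest == n || pages[i].access_count < pages[coldest].access_count)
                coldest = i;
        }
        if (coldest == n)
            break;
        pages[coldest].backing = IN_PCMS;     /* content lands in section 308 */
        pages[coldest].dram_channel = -1;
    }

    for (size_t i = 0; i < n; i++)                           /* 405/406 */
        if (pages[i].backing == IN_DRAM && pages[i].dram_channel == victim_channel)
            pages[i].dram_channel = surviving_channel;
    /* 407: the victim channel's DIMMs can now be powered down. */
}

int main(void)
{
    struct page pages[4] = {
        { 900, IN_DRAM, 0 }, { 5, IN_DRAM, 0 },   /* hot and cold pages on channel 0 */
        { 700, IN_DRAM, 1 }, { 3, IN_DRAM, 1 },   /* hot and cold pages on channel 1 */
    };
    shutdown_channel(pages, 4, 1, 0);             /* turn channel 1 off */
    for (size_t i = 0; i < 4; i++)
        printf("page %zu: %s channel %d\n", i,
               pages[i].backing == IN_DRAM ? "DRAM" : "PCMS", pages[i].dram_channel);
    return 0;
}
```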
[0087] After a DRAM channel has been disabled, referring to Figure 5, a further decision may be made by the system software to enter a higher performance state 502, which involves enabling a currently inactive DRAM channel. In this case, the cache lines residing in the section 308 of the PCMS system memory 304 reserved for storing migrated DRAM content are "migrated back" to the DRAM channel being activated 503. The physical address component of the TEs for the corresponding memory pages of all such cache lines is modified in the TE store 307 to reflect their new storage location on the newly activated DRAM channel 504. Again, system operation may be suspended in order to carry out the DRAM channel activation, the cache line migration, the TE modification, and the invalidation of any copies of the modified TEs residing in a TLB. [0088] Figure 6 illustrates a software table structure hierarchy that can be used, for example, by intelligent power management software (such as ACPI) to support the computer system's ability to enable/disable a memory channel as described above. As seen in Figure 6, the memory power state table 600 contains a header 601, a set of commands 602, definitions of one or more power node structures 603_1 to 603_X, and the characteristics of the various power states 604_1 to 604_Y supported by the region of system memory represented by the memory power state table 600. [0089] A single instance of a memory power state table may be instantiated, for example, for any of the following: an entire system memory, a technology-specific region of a system memory (e.g., a first table instantiated for a DRAM section of system memory and a second table instantiated for a PCMS section of system memory), etc. The header information 601 contains information specific to the portion of system memory for which the memory power state table is instantiated. In one embodiment, the header information 601 contains: i) a signature for the table; ii) the length of the entire table, including all of its components 602, 603, 604; iii) the version number of the table structure; iv) a checksum for the table; v) an OEM identifier; vi) an ID of the vendor of the utility that created the table; and vii) an ID of the revision of the utility that created the table. [0090] The command set 602 contains basic commands for reading and writing information to and from the power state table and its various components. [0091] The memory power state table identifies the number (X) of power node structures 603 listed in the table and contains, or at least provides references to, the power node structures 603_1 to 603_X themselves. In one embodiment, a separate power node structure is created for each memory channel, in the portion of memory that the table 600 represents, that is capable of supporting multiple power states, any of which can be entered programmatically. For example, referring briefly to Figure 3, if the memory power state table 600 represents the DRAM portion 303 of a system memory having both DRAM 303 and PCMS 304 sections, an independent power node structure can be instantiated for each DRAM memory channel 302_1 to 302_4. [0092] As seen in Figure 6, each power node structure, such as power node structure 603_1, includes: [0093] i) a power node structure identifier 605; [0094] ii) the address range 606 of the system memory address space that the power node structure represents; and [0095] iii) the power state 607 in which the specific section of system memory represented by the power node structure currently resides.
Note that the current memory power state 607 corresponds to one of the power states from the set of memory power states 604_1 to 604_Y defined by the power state table 600 as a whole. [0096] In an embodiment in which the power node structure 603_1 represents a DRAM channel, the address range 606 of the power node structure corresponds to a range of virtual system memory addresses whose translation into physical address space corresponds to physical addresses supported by the channel. The deactivation and reactivation of a memory channel discussed above, however, can "scramble" a contiguous range of virtual addresses across multiple memory channels. In other words, at least after a sequence of channel shutdowns, a single memory channel may support multiple non-contiguous sections of virtual address space. [0097] This fragmentation of virtual address space over its corresponding physical storage resources can be aggravated each time a memory channel shutdown sequence is initiated. Thus, in one embodiment, multiple additional power node structures 603_1_2 to 603_1_R can be instantiated for the same memory channel, where each power node structure instance corresponds to a different range of virtual address space that is effectively stored on the channel. Multiple power node structure instances can effectively be "tied" together, to represent that their corresponding virtual address ranges are stored on the same memory channel, by inserting the same power node structure identifier element 605 into each of them. Any action taken against that specific identifier will, of course, invoke all of the power node structures 603_1 and 603_1_2 to 603_1_R that bear the identifier. [0098] During a channel shutdown or reactivation transition, the power node structure instances of the specific memory channel must be modified to reflect the "new" virtual addresses whose storage the channel supports. This may involve any of the following: adding new, or deleting existing, power node structure instances instantiated for the memory channel, and/or modifying the virtual address ranges 606 specified in existing power node structure instances instantiated for the memory channel. As such, according to various embodiments, when a memory channel is turned off or back on, the TE entries of a TE store in system memory are modified for the affected virtual-to-physical address translations, and the address range elements 606 of the power node structures used by the power management software, as well as the number of such structures, may be modified. [0099] When a DRAM memory channel is turned off as described at length above, the current power state 607 of the memory channel's power node structure instances corresponds to a low power state in which the memory channel is turned off and not actively being used. In this power state, the application of clock signals, strobes, and/or refresh to the DIMM cards on the memory channel may be suspended. Supply voltages applied to the DIMM cards on the channel may also be reduced as part of the set of low power state characteristics. [00100] If the memory channel is reactivated after being turned off, the current power state 607 of the memory channel's power node structure instances changes to another power state that corresponds to an active memory channel. In that case, any signals or supply voltages that were turned off are reapplied.
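For illustration, the Figure 6 hierarchy might be modeled in C along the following lines; the field names, fixed-width types, and layout are assumptions made for this sketch and do not reproduce the exact layout of any ACPI or firmware table:

```c
#include <stdint.h>

/* One power state definition (one of 604_1 .. 604_Y). */
struct mem_power_state {
    uint32_t id;
    uint32_t average_power_mw;   /* average power consumed in this state */
    uint32_t exit_latency_us;    /* time spent transitioning out of the state */
};

/* One power node structure (603): instantiated per memory channel, or per
 * virtual address range stored on a channel when several instances are
 * "tied" together by sharing the same identifier 605. */
struct mem_power_node {
    uint32_t node_id;                        /* 605 */
    uint64_t addr_range_base;                /* 606: virtual address range ...  */
    uint64_t addr_range_length;              /*      ... represented by the node */
    const struct mem_power_state *current;   /* 607/610: current power state */
};

/* The memory power state table (600) as a whole. */
struct mem_power_state_table {
    struct {                                 /* 601: header */
        char     signature[4];
        uint32_t length;                     /* length of the entire table */
        uint8_t  revision;
        uint8_t  checksum;
        char     oem_id[6];
        uint32_t creator_vendor_id;
        uint32_t creator_revision;
    } header;
    uint32_t                node_count;      /* X */
    struct mem_power_node  *nodes;           /* 603_1 .. 603_X */
    uint32_t                state_count;     /* Y */
    struct mem_power_state *states;          /* 604_1 .. 604_Y */
};
```

In this sketch, tying several node instances to one memory channel, as described in paragraph [0097], is represented simply by giving those instances the same node_id (element 605).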
[00101] For the above power state transitions, device driver software within an operating system that is associated with a memory channel can control the shutting down of the various signals, and the supplying of voltages when entering a higher power state. [00102] In one embodiment, the current memory power state definition 607 includes a pointer 610 that points to the specific power state, among the set of memory power states 604_1 to 604_Y supported by the table 600, in which the power node structure 603_1 currently resides. In one embodiment, each power state definition among the definitions 604_1 to 604_Y defines the average power consumed (and/or the maximum and minimum power consumption) when a system memory component is in that power state. In another embodiment, each power state definition also includes a definition of the amount of time spent transitioning into or out of the power state (e.g., an exit latency). [00103] Note that while the above discussion has focused primarily on the complete shutdown of a single channel, the above teachings can be applied at both coarser and finer granularity. Specifically, the above teachings can be applied to a sequence in which more than one channel is turned off as part of the same power state transition (for example, where there is interleaving across memory channels), or to a sequence in which less than a full memory channel is turned off (such as turning off just one DRAM chip). [00104] Figure 7 shows a software architecture that includes power management software 710, a power management table 700 as discussed above, and a memory management software component 712 with a tracking component 711 that keeps track of software virtual address usage rates and updates the TE information of migrating cache lines. [00105] The power management software 710 (such as ACPI) decides that a lower system power state is required. With prior knowledge, through the availability of the power management table 700, that the system memory is organized into multiple power nodes 703_1 to 703_N, each supporting multiple power states, the power management software 710 issues a command 702 identifying power node 703_2 and a new, lower power state it is to enter. [00106] Here, each of the power nodes 703_1 to 703_N corresponds to a different DRAM (and perhaps PCMS) memory channel residing in the computer system, each having multiple DIMM cards connected to it. In this example, power node 703_2 corresponds to a DRAM memory channel. The memory management software 712 for the underlying computer system memory channels is invoked 713 in response to the command 702 and recognizes the specific DRAM memory channel that is to be turned off. [00107] The memory management software 712 includes a tracking component 711 that tracks which virtual addresses allocated to DRAM are most frequently used and which are least used. Subtracting the DRAM storage capacity that will be lost by turning off the DRAM memory channel yields the new, smaller DRAM capacity. The most frequently used DRAM virtual addresses (and/or other virtual addresses flagged as access-time critical), consistent with this capacity, are identified to be kept in DRAM. The remainder corresponds to a collection of less frequently used DRAM virtual addresses whose corresponding content is equivalent in size to the capacity of the memory channel to be turned off. [00108] A migration component 714 handles the appropriate cache line migration, as discussed above.
Here again, the migration includes reading the cache lines associated with less used virtual addresses from DRAM (both those on the memory channel to be turned off and those on the other DRAM memory channels) and writing them into the reserved PCMS memory space. Cache lines associated with frequently used virtual addresses located on the memory channel to be shut down are transferred to the locations on the remaining active memory channels that were vacated by the transfer into PCMS memory space. TEs in the TE store for virtual addresses that have a new physical address as a result of the migration are updated 715, and any copies of those TEs in a TLB are invalidated. [00109] The address range information of the power nodes in the power table 700 representing the remaining active DRAM memory channels is then updated 716 to reflect their new virtual address ranges. This may include creating or deleting power node structure instances that are identified as belonging to the same power node. [00110] The memory channel is then turned off, for example by device driver software 717 for the memory channel, which may stop or slow various clock/strobe signals on the channel (and perhaps also reduce the supply voltage on the memory channel). [00111] The memory channel can be re-enabled by following a similar flow, but in which the migration component 714 migrates the cache lines previously stored in PCMS back to the re-enabled memory channel. [00112] Although the above methodologies have been described as being performed largely, if not entirely, in software, any of the various steps discussed above may be performed in hardware or in a combination of hardware and software.
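The division of labor among the Figure 7 components can be summarized with the following C sketch; the function names and interfaces are invented solely for this summary (they do not correspond to any real ACPI, driver, or operating system API), and each stub stands in for the numbered component named in its comment:

```c
#include <stdio.h>

/* 711: tracking component - identify hot and cold virtual addresses. */
static void track_usage(int dram_channel)
{ printf("711: rank virtual addresses for channel %d\n", dram_channel); }

/* 714/715: migration component - move cache lines, update TEs, flush TLB copies. */
static void migrate_cache_lines(int dram_channel)
{ printf("714: migrate lines off channel %d, update TEs (715)\n", dram_channel); }

/* 716: update the address range info of the affected power nodes in table 700. */
static void update_power_table(int dram_channel)
{ printf("716: update power table ranges after losing channel %d\n", dram_channel); }

/* 717: device driver stops clocks/strobes and lowers supply voltages. */
static void power_down_channel(int dram_channel)
{ printf("717: power down channel %d\n", dram_channel); }

/* 710/702/713: power management software decides to enter a lower power state
 * and issues a command naming the power node (a DRAM channel here), which
 * invokes the memory management software 712. */
static void enter_low_power_state(int dram_channel)
{
    track_usage(dram_channel);
    migrate_cache_lines(dram_channel);
    update_power_table(dram_channel);
    power_down_channel(dram_channel);
}

int main(void)
{
    enter_low_power_state(2);   /* e.g., the channel behind power node 703_2 */
    return 0;
}
```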
Claims:
Claims (17) [0001] 1. Method, characterized in that it comprises: performing the following in response to a desire to enter a lower performance state: transferring cache lines stored on a dynamic random access memory (DRAM) memory channel to other DRAM memory channels and to active non-volatile system memory of the PCMS type, the active non-volatile system memory being accessible at cache line granularity and being memory from which it is possible to execute directly, wherein said cache lines associated with more frequently used virtual addresses are transferred to the other DRAM memory channels and said cache lines associated with less frequently used virtual addresses are transferred to said PCMS memory; and turning off said DRAM memory channel. [0002] 2. Method according to claim 1, characterized in that said cache lines transferred to the PCMS memory are transferred to storage space previously reserved in said PCMS memory for cache lines that migrate out of DRAM storage in response to a DRAM memory channel shutdown operation. [0003] 3. Method according to claim 2, characterized in that it further comprises performing the following in response to a desire to enter a higher performance state: reactivating said DRAM memory channel; and transferring said cache lines that migrated to the PCMS memory into said DRAM memory channel. [0004] 4. Method according to claim 1, characterized in that it further comprises updating the virtual-to-physical address translations for said cache lines. [0005] 5. Method according to claim 4, characterized in that said virtual-to-physical address translations are stored in storage space of said other DRAM memory channels. [0006] 6. Method according to claim 5, characterized in that it further comprises invalidating any virtual-to-physical address translations for said cache lines found in a translation lookaside buffer. [0007] 7. Method, characterized in that it comprises: tracking the virtual address usage of a software program; and performing the following in response to a desire to enter a lower performance state: reading cache lines associated with less used virtual addresses of said software program from DRAM memory channels and writing them to active non-volatile system memory of the PCMS type, said active non-volatile system memory being accessible at cache line granularity and being memory from which said software program is able to execute directly; writing cache lines associated with more frequently used virtual addresses of one of said DRAM memory channels to locations of the other one or more remaining DRAM memory channels, said locations having previously stored at least some of said cache lines written to the active non-volatile system memory; and turning off said one DRAM memory channel. [0008] 8. Method according to claim 7, characterized in that said cache lines written to the PCMS memory are written to storage space of the PCMS memory previously reserved for storing cache lines transferred out of DRAM storage space. [0009] 9. Method according to claim 8, characterized in that it further comprises performing the following in response to a desire to enter a higher performance state: reactivating said one DRAM memory channel; and transferring said cache lines written to the PCMS memory into said DRAM memory channel. [0010] 10. Method according to claim 7, characterized in that it further comprises updating the virtual-to-physical address translations for said cache lines associated with less used virtual addresses and for said cache lines associated with more frequently used virtual addresses. [0011] 11.
Method according to claim 10, characterized in that it further comprises the invalidation of any virtual to physical address conversions, found in a translation look aside buffer for said cache lines associated with less used virtual addresses, and the those cache lines associated with the most frequently used virtual addresses. [0012] 12. Machine-readable media, and non-transient signal energy, containing the program code which, when processed by a central processing unit of a computer system, causes the execution of a method, characterized in that said method comprises: execution of the following, in response to a desire to enter a low-performance state: Transferring cache lines stored in one memory channel from dynamic random access memory (DRAM) to other channels of DRAM memory and non-volatile system memory activates dio PCMS type, being the active non-volatile system memory accessible by the granularity of the cache line and from which the software program code can execute directly from where the cache lines associated with the most frequently used virtual addresses are transferred to other DRAM memory channels and those of cache lines associated with less frequently used virtual addresses are migrated to memory active non-volatile system; and, turning off said DRAM memory channel. [0013] 13. Machine-readable media according to claim 12, characterized in that said cache lines transferred to PCMS memory are transferred to storage space previously reserved in PCMS memory, to cache lines that have migrated from storage in DRAM in response to the DRAM memory shutdown operation. [0014] 14. Machine readable media according to claim 13, characterized in that said method further comprises performing the following, in response to the desire to enter a higher performance state: reactivating said DRAM memory channel; transferring said migrated cache lines to PCMS memory in said DRAM memory channel. [0015] 15. Machine-readable media according to claim 12, characterized in that said method further comprises updating the virtual-to-physical address conversions of the cache lines. [0016] 16. Machine readable media according to claim 15, characterized in that the virtual to physical address conversions are stored in storage space of other DRAM memory channels. [0017] 17. Machine-readable media according to claim 16, characterized in that it further comprises the invalidation of any physical to virtual address conversion for said cache lines found in a translation look aside buffer.
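For illustration, a companion sketch (again in C, and again not code from the patent) covers the reverse path recited in claims 3, 9 and 14: the DRAM memory channel is re-enabled and the cache lines that were parked in the PCMS memory are copied back to it, with the translation and TLB fix-ups noted inline. The channel number, slot addresses and helper behaviour are assumptions for the example only.

```c
/* Companion sketch for the re-enable path of claims 3, 9 and 14 (and
 * paragraph [00111]): the channel is powered back up and the cache lines
 * that were parked in PCMS are copied back to it.  All names and values
 * here are hypothetical placeholders, not an API defined by the patent.  */
#include <stdint.h>
#include <stdio.h>

struct parked_line {
    uint64_t virt;       /* virtual address the line backs            */
    uint64_t pcms_slot;  /* reserved PCMS slot where it was parked    */
};

/* Reverse of the shutdown flow: re-enable the channel, then migrate the
 * parked lines out of PCMS and fix up their translations.               */
static void reenable_channel(int channel, struct parked_line *p, int n)
{
    printf("ungating clocks/strobes on DRAM channel %d\n", channel);
    for (int i = 0; i < n; i++) {
        uint64_t dst = 0x100000u + (uint64_t)i * 64;   /* assumed free slots */
        printf("copy vaddr 0x%llx: PCMS slot 0x%llx -> DRAM 0x%llx, "
               "update translation, invalidate TLB entry\n",
               (unsigned long long)p[i].virt,
               (unsigned long long)p[i].pcms_slot,
               (unsigned long long)dst);
    }
}

int main(void)
{
    struct parked_line parked[] = { {0x2000, 0x0}, {0x4000, 0x40}, {0x6000, 0x80} };
    reenable_channel(1, parked, 3);
    return 0;
}
```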
Similar technologies:
Publication number | Publication date | Patent title
US10521003B2 | 2019-12-31 | Method and apparatus to shutdown a memory channel
US11200176B2 | 2021-12-14 | Dynamic partial power down of memory-side cache in a 2-level memory hierarchy
US10719443B2 | 2020-07-21 | Apparatus and method for implementing a multi-level memory hierarchy
US11054876B2 | 2021-07-06 | Enhanced system sleep state support in servers using non-volatile random access memory
US20180341588A1 | 2018-11-29 | Apparatus and method for implementing a multi-level memory hierarchy having different operating modes
US9817758B2 | 2017-11-14 | Instructions to mark beginning and end of non transactional code region requiring write back to persistent storage
US9317429B2 | 2016-04-19 | Apparatus and method for implementing a multi-level memory hierarchy over common memory channels
US9286205B2 | 2016-03-15 | Apparatus and method for phase change memory drift management
US20140229659A1 | 2014-08-14 | Thin translation for system access of non volatile semicondcutor storage as random access memory
Patent family:
Publication number | Publication date
KR101572403B1 | 2015-11-26
BR112014015441A2 | 2017-06-13
GB2513748A | 2014-11-05
US10521003B2 | 2019-12-31
GB2513748B | 2020-08-19
CN104115132B | 2018-02-06
TW201331941A | 2013-08-01
CN104115132A | 2014-10-22
BR112014015441A8 | 2017-07-04
KR101761044B1 | 2017-07-24
US20140143577A1 | 2014-05-22
US9612649B2 | 2017-04-04
TWI614752B | 2018-02-11
GB201411390D0 | 2014-08-13
WO2013095559A1 | 2013-06-27
DE112011106032T5 | 2014-12-04
KR20140098221A | 2014-08-07
US20170206010A1 | 2017-07-20
KR20150138404A | 2015-12-09
Legal status:
2018-12-18 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
2019-10-08 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2021-05-04 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2021-05-25 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 12/22/2011, SUBJECT TO THE LEGAL CONDITIONS.
Priority:
Application number | Filing date | Patent title
PCT/US2011/067007 | WO2013095559A1 | 2011-12-22 | 2011-12-22 | Power conservation by way of memory channel shutdown